Self-Adaptive Training: beyond Empirical Risk Minimization

Neural Information Processing Systems

This problem is important for robust learning from data corrupted by, e.g., random noise and adversarial examples. The standard empirical risk minimization (ERM) on such data, however, may easily overfit the noise and thus suffer from sub-optimal performance. In this paper, we observe that model predictions can substantially benefit the training process: self-adaptive training significantly mitigates the overfitting issue and improves generalization over ERM under both random and adversarial noise.
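The core idea the abstract describes, using the model's own predictions to guide training on noisy labels, can be sketched as a simple label-correction step. This is a minimal illustration, not the paper's exact algorithm: the blending rule and the momentum hyperparameter `alpha` are assumptions for the sake of the example.

```python
import numpy as np

def update_targets(targets, predictions, alpha=0.9):
    """Hypothetical label-correction step in the spirit of self-adaptive
    training: blend the (possibly noisy) training targets with the model's
    own predictions via an exponential moving average. `alpha` is an
    assumed momentum hyperparameter, not taken from the abstract."""
    return alpha * targets + (1.0 - alpha) * predictions

# A one-hot label that may be corrupted, and a confident model prediction.
noisy_label = np.array([0.0, 1.0, 0.0])   # annotated class: 1
model_pred  = np.array([0.8, 0.1, 0.1])   # model believes class 0

corrected = update_targets(noisy_label, model_pred, alpha=0.9)
print(corrected)  # [0.08 0.91 0.01]
```

Applied repeatedly over epochs, such an update lets the training targets drift toward the model's predictions, which is one plausible mechanism by which overfitting to noisy labels is reduced.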





Understanding Programmatic Weak Supervision via Source-aware Influence Function

Neural Information Processing Systems

To achieve this, we build on Influence Function (IF) and propose source-aware IF, which leverages the generation process of the probabilistic labels to decompose the end model's training objective and then calculate the influence associated with each (data, source, class) tuple.